It seems to me that your opinion is that Eliezer was the driving force behind SIAI and that all of the other people involved with it, and those who have donated to it, are basically “followers”. This is my inference based on the fact that so many of your arguments have to do with Eliezer’s credentials, e.g.
I would take you far more seriously if you spent 1 week of your time to get into the top 5 in a marathon contest on TopCoder
I am skeptical of this reasoning. It seems to me that the views of people like Nick Bostrom and Anna Salamon should also be considered evidence in favor of the FOOM hypothesis, if this is the angle we are evaluating the question from. Surely their beliefs are not completely independent; but surely they are not completely dependent, either.
Eliezer was the driving force behind SIAI and that all of the other people involved with it, and those who have donated to it, are basically “followers”.
No one sane would argue with this statement. Proof: imagine the fate of SIAI if EY suddenly quits (say, for personal reasons). Or don’t imagine: ask Anna and Luke what they would do in that case.
It’s clear that Eliezer has been the driving force behind SIAI existing as an organization. He founded it, his writings have been its most visible and influential face, he wants to organize an FAI team of which he would be a member, and so forth. There are exceptions (the Singularity Summit is basically independent of him, and he was not particularly involved in the Visiting Fellows and rationality camp events that have occurred so far, which proceeded mostly without him), but “driving force” remains very fair.
However, a number of SIAI folk, like Michael Vassar and myself, were independently interested in AI risk (and benefit) before coming in contact with Eliezer or his work, and would likely have continued to pursue other paths to affect this area sans EY. I think this is important for the underlying question about the independence of beliefs, which got bundled together with the “driving force” claim.
Also, Nick Bostrom’s work has been influential for a number of people, especially his papers on the ethics of astronomical waste and superintelligence.
EY founded it. Everyone else is self-selected for joining (as you yourself explained), and represents extreme outliers as far as I can tell.
I’m well acquainted with both Anna and Luke (mainly due to geographic accident—I used to attend UC Berkeley), and I’m pretty sure they would still work on rationality and AI if Eliezer quit for personal reasons.
It’s interesting to see the difference between the rationalist culture online and in real life. No one in real life, to my knowledge, who is actually acquainted with the people behind SI has quite the attitude you do. Reminds me of this letter I read recently:
http://www.lettersofnote.com/2012/03/i-am-very-real.html
FWIW, I think that there are probably aspects of Less Wrong culture that could be improved; I’d like to see it become more egalitarian, with less downvoting.
To my knowledge, nobody has ever complained about excess noise relative to signal on Less Wrong, which indicates to me that we can ease up on the moderation somewhat, e.g. by restricting downvote velocity heavily. Hopefully we could buy additional diversity of opinion at a very low cost in boring and low-quality posts.
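To make the proposal concrete, here is a minimal sketch of one way “restricting downvote velocity” could work: a per-user cap on downvotes within a rolling window. All names, caps, and window sizes here are hypothetical illustrations, not anything from Less Wrong’s actual karma code.

```python
# Hypothetical sketch of a downvote rate limiter; the cap and window
# are placeholder values, not anything Less Wrong actually uses.
import time
from collections import defaultdict, deque

DOWNVOTE_CAP = 5            # max downvotes per user per window (assumed)
WINDOW_SECONDS = 24 * 3600  # rolling 24-hour window (assumed)

_recent = defaultdict(deque)  # user_id -> timestamps of recent downvotes

def try_downvote(user_id):
    """Record a downvote if the user is under the cap; return whether it counted."""
    now = time.time()
    window = _recent[user_id]
    # Discard downvotes that have aged out of the rolling window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= DOWNVOTE_CAP:
        return False  # velocity cap reached; reject the downvote
    window.append(now)
    return True
```

Under a scheme like this, a determined downvoter can still vote, just slowly: mass downvoting becomes expensive while ordinary quality signals stay intact.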
It’s interesting to see the difference between the rationalist culture online and in real life. No one in real life, to my knowledge, who is actually acquainted with the people behind SI has quite the attitude you do.
Yeah, people often come across very differently online and IRL; not much can be done about it. I suppose that some of the restraints that keep people friendlier in person are absent when all one sees is a string of text.
As for voting down, I prefer it as it is (though I would like to encourage downvoters to explain why; I try to do that when I downvote, anyhow).
To my knowledge, nobody has ever complained about excess noise relative to signal on Less Wrong, which indicates to me that we can ease up on the moderation somewhat, e.g. by restricting downvote velocity heavily.
I prefer it this way. Maybe somewhat less moderation would be better, but I think we will get there gradually without announcing such an intent. I am afraid that trying to intentionally reduce moderation could easily lead to too noisy a site, and the change back would be painful because it would create conflicts.
It seems to me that web communities tend to become less moderated as time goes on. I have also seen a few communities with explicit rules that were intentionally broken, after which the offenders complained loudly about censorship and created huge mind-killing debates; and I fear that a debate about voting, without explicit rules, would be even worse: people accusing other people of censorship by downvoting, unproven accusations of karma assassination, well-meaning people upvoting worthless content just to provide “freedom” and “balance” against the supposed censors and thus completely ruining the feedback system. Maybe the LW community would handle it with greater rationality, but to my mind the risk is not worth taking.
Maybe the LW community would handle it with greater rationality, but to my mind the risk is not worth taking.
I am not confident that’s true of the current community (call it <10%). I am still less confident (call it <1%) that it would be true of the community that would replace the current community, should those changes be made.
Paging Anna and Luke. I expect that they would stick with it, assuming they still had enough funding. Organizations are designed to be robust to the loss of individuals.
I will rephrase the statement a little, and emphasize a distinction between “SIAI” and some “AI-ethics-oriented institution”. We can’t re-run history, but there are enough people interested in these topics that I’d expect some non-profit devoted to this cause would have sprung into existence at some point between 2003 and 2017, even if Eliezer had taken a different path. Is there any way we can test this?
Organizations are designed to be robust to the loss of individuals.
Only if such design work has actually taken place … or if the organizations you’re sampling have already been subjected to selection on this basis. It isn’t magically so.
I should have said “typically.”
I expect that they would stick with it, assuming they still had enough funding.
The funding is the key: assuming they can keep their major donor, or acquire sufficient funding elsewhere, losing Eliezer is hardly a dealbreaker. It’s just an HR problem, and Luke would solve it. It may take the better part of a decade to get someone trained to Eliezer’s level of expertise, but it’s something that could be done concurrently with everything else (for example, concurrently with all the other Eliezer-replacements that were in training at the same time).